
    Robustness of Stochastic Optimal Control to Approximate Diffusion Models under Several Cost Evaluation Criteria

    In control theory, a nominal model is typically assumed, an optimal control is designed for it, and this control is then applied to the actual (true) system. This gives rise to the problem of performance loss due to the mismatch between the true model and the assumed model. The robustness problem in this context is to show that this error decreases to zero as the assumed model approaches the true model. We study this problem when the state dynamics of the system are governed by controlled diffusion processes. In particular, we discuss continuity and robustness properties of finite-horizon and infinite-horizon α-discounted/ergodic optimal control problems for a general class of non-degenerate controlled diffusion processes, as well as for optimal control up to an exit time. Under a general set of assumptions and a convergence criterion on the models, we first establish that the optimal value of the approximate model converges to the optimal value of the true model. We then establish that the error incurred by applying a control policy designed for an incorrectly estimated model to the true model decreases to zero as the incorrect model approaches the true model. We will see that, compared with related results in the discrete-time setup, the continuous-time theory lets us utilize the strong regularity properties of solutions to the optimality (HJB) equations, via the theory of uniformly elliptic PDEs, to arrive at strong continuity and robustness properties. Comment: 33 pages
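
    As a schematic reference (not reproduced from the paper), consider a controlled diffusion dX_t = b(X_t, U_t) dt + σ(X_t) dW_t with running cost c and discount rate α > 0; the α-discounted optimality (HJB) equation then takes the form below, and uniform ellipticity of σσ⊤ is what provides the strong regularity of its solution V alluded to above.

    \[
    \alpha V(x) \;=\; \min_{u \in \mathbb{U}} \left[\, c(x,u) + b(x,u)\cdot \nabla V(x) + \tfrac{1}{2}\,\mathrm{tr}\!\left(\sigma(x)\sigma(x)^{\top}\nabla^{2} V(x)\right) \right].
    \]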

    Stochastic Stability of Event-triggered Anytime Control

    We investigate control of a non-linear process when communication and processing capabilities are limited. The sensor communicates with a controller node through an erasure channel that introduces i.i.d. packet dropouts. Processor availability for control is random and, at times, insufficient to calculate plant inputs. To make efficient use of communication and processing resources, the sensor only transmits when the plant state lies outside a bounded target set. Control calculations are triggered by the received data. If a plant state measurement is successfully received while the processor is available for control, the algorithm recursively calculates a sequence of tentative plant inputs, which are stored in a buffer for potential future use. This safeguards against time steps when the processor is unavailable for control. We derive sufficient conditions on system parameters for stochastic stability of the closed loop and illustrate performance gains through numerical studies. Comment: IEEE Transactions on Automatic Control, under review
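
    A minimal sketch of the event-triggered, buffered control loop described above, with a toy scalar plant; the dropout probability, the processor-availability model, the control law, and all names below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical parameters (illustrative only, not from the paper) ---
DROP_PROB = 0.3       # i.i.d. packet-dropout probability of the erasure channel
PROC_PROB = 0.6       # probability that the processor is available at a given step
TARGET_RADIUS = 0.5   # sensor transmits only if |x| exceeds this radius
BUFFER_LEN = 5        # number of tentative inputs computed per triggered calculation

def plant(x, u):
    """Toy scalar nonlinear plant; stand-in for the process in the abstract."""
    return 1.1 * x + 0.1 * x**2 + u + 0.05 * rng.normal()

def tentative_inputs(x, n):
    """Recursively compute a sequence of n tentative inputs from state x.

    A placeholder control law (approximate cancellation of the toy plant's
    dynamics); the paper's algorithm is not reproduced here.
    """
    seq, xp = [], x
    for _ in range(n):
        u = -(1.1 * xp + 0.1 * xp**2)     # drive the predicted state toward 0
        seq.append(u)
        xp = 1.1 * xp + 0.1 * xp**2 + u   # noiseless one-step prediction
    return seq

x, buffer = 3.0, []
for t in range(50):
    # Event trigger: transmit only when the state leaves the target set.
    transmitted = abs(x) > TARGET_RADIUS
    received = transmitted and rng.random() > DROP_PROB
    if received and rng.random() < PROC_PROB:
        # Processor available: refill the buffer with tentative inputs.
        buffer = tentative_inputs(x, BUFFER_LEN)
    # Apply the next buffered input if any; otherwise fall back to zero input.
    u = buffer.pop(0) if buffer else 0.0
    x = plant(x, u)
```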

    Convergence of Finite Memory Q-Learning for POMDPs and Near Optimality of Learned Policies under Filter Stability

    In this paper, for POMDPs, we establish the convergence of a Q-learning algorithm for control policies that use a finite history of past observations and control actions and, consequently, the near optimality of the resulting limit Q functions under explicit filter stability conditions. We present explicit error bounds relating the approximation error to the length of the finite history window. We establish the convergence of such Q-learning iterations under mild ergodicity assumptions on the state process during the exploration phase. We further show that the limit fixed-point equation gives an optimal solution for an approximate belief-MDP. We then provide bounds on the performance of the policy obtained using the limit Q values compared to the performance of the optimal policy for the POMDP, where we also present explicit conditions using recent results on filter stability in controlled POMDPs. While many experimental results exist, (i) the rigorous asymptotic convergence (to an approximate MDP value function) of such finite-memory Q-learning algorithms and (ii) the near optimality with an explicit rate of convergence (in the memory size) are, to our knowledge, new to the literature. Comment: 32 pages, 12 figures. arXiv admin note: text overlap with arXiv:2010.0745
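
    A minimal sketch of Q-learning over a finite window of past observations and actions, in the spirit of the abstract; the toy environment, the window length N, the step sizes, and the exploration rate are assumptions made here for illustration.

```python
import random
from collections import defaultdict, deque

# Finite-memory Q-learning sketch: the learning "state" is the window of the
# last N observation/action pairs plus the current observation. The environment
# and all constants below are illustrative assumptions, not the paper's setup.
N = 2
ALPHA, GAMMA = 0.1, 0.95
ACTIONS = [0, 1]

Q = defaultdict(float)              # Q[(window, action)], unseen entries are 0
window = deque(maxlen=2 * N + 1)    # alternating observation/action history

def env_step(action):
    """Toy POMDP stand-in: returns (observation, cost)."""
    obs = random.randint(0, 1)
    cost = float(obs != action)
    return obs, cost

obs, _ = env_step(random.choice(ACTIONS))
window.append(obs)
for t in range(10_000):
    s = tuple(window)
    # Epsilon-greedy exploration over the finite-window "state".
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = min(ACTIONS, key=lambda b: Q[(s, b)])   # cost minimization
    obs, cost = env_step(a)
    window.append(a)
    window.append(obs)
    s_next = tuple(window)
    # Standard Q-learning update applied to the windowed state.
    target = cost + GAMMA * min(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```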

    Q-Learning for Continuous State and Action MDPs under Average Cost Criteria

    For infinite-horizon average-cost criterion problems, we present several approximation and reinforcement learning results for Markov Decision Processes with standard Borel spaces. Toward this end, (i) we first provide a discretization-based approximation method for fully observed Markov Decision Processes (MDPs) with continuous spaces under average cost criteria, and we provide error bounds for the approximations when the dynamics are only weakly continuous, under certain ergodicity assumptions. In particular, we relax the total variation condition given in prior work to weak continuity as well as Wasserstein continuity conditions. (ii) We provide synchronous and asynchronous Q-learning algorithms for continuous spaces via quantization and establish their convergence. (iii) We show that the convergence is to the optimal Q values of the finite approximate models constructed via quantization. Our Q-learning convergence results and their convergence to near optimality are new for continuous spaces, and the proof method is new even for finite spaces, to our knowledge. Comment: 3 figures
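
    A minimal sketch of quantization-based Q-learning for a continuous state and action space; the toy dynamics, the uniform grids, and the relative (average-cost) update around a reference state-action pair are assumptions made here for illustration rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Quantized Q-learning sketch for a continuous-state/action problem under the
# average-cost criterion. Grids, dynamics, and constants are illustrative.
STATE_BINS = np.linspace(-2.0, 2.0, 21)    # state quantizer (grid points)
ACTION_GRID = np.linspace(-1.0, 1.0, 11)   # action quantizer
ALPHA = 0.05

def quantize_state(x):
    """Map a continuous state to the index of the nearest grid point."""
    return int(np.argmin(np.abs(STATE_BINS - x)))

def step(x, u):
    """Toy continuous dynamics with per-stage cost x^2 + u^2."""
    x_next = np.clip(0.9 * x + u + 0.1 * rng.normal(), -2.0, 2.0)
    return x_next, x**2 + u**2

Q = np.zeros((len(STATE_BINS), len(ACTION_GRID)))
ref = (quantize_state(0.0), len(ACTION_GRID) // 2)   # reference pair
x = 0.0
for t in range(50_000):
    i = quantize_state(x)
    # Epsilon-greedy over the quantized action grid.
    j = int(rng.integers(len(ACTION_GRID))) if rng.random() < 0.2 else int(np.argmin(Q[i]))
    x_next, cost = step(x, ACTION_GRID[j])
    i_next = quantize_state(x_next)
    # Relative (average-cost) update: subtracting the reference value keeps Q
    # bounded, and that offset serves as an estimate of the optimal average cost.
    target = cost + Q[i_next].min() - Q[ref]
    Q[i, j] += ALPHA * (target - Q[i, j])
    x = x_next
```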

    Nash and Stackelberg Equilibria for Dynamic Cheap Talk and Signaling Games

    Simultaneous (Nash) and sequential (Stackelberg) equilibria of two-player dynamic quadratic cheap talk and signaling game problems are investigated under a perfect Bayesian formulation. For the dynamic scalar and multidimensional cheap talk, the Nash equilibrium cannot be fully revealing, whereas the Stackelberg equilibrium is always fully revealing. Further, the final-stage Nash equilibria have to be essentially quantized when the source is scalar and has a density, and non-revealing in the multi-dimensional case. In the dynamic signaling game, where the transmission of a Gauss-Markov source over a memoryless Gaussian channel is considered, affine policies constitute an invariant subspace under best-response maps for both scalar and multi-dimensional sources under Nash equilibria; however, the Stackelberg equilibrium policies are always linear for scalar sources but may be non-linear for multi-dimensional sources. Further, under the Stackelberg setup, the conditions under which the equilibrium is non-informative are derived for scalar sources.
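
    For orientation, a schematic of the standard quadratic cheap-talk cost structure consistent with the abstract (the bias parameter b and the bin edges m_k are notation chosen here, not necessarily the paper's): the encoder and decoder costs are

    \[
    c^{e}(x,u) = (x - u - b)^{2}, \qquad c^{d}(x,u) = (x - u)^{2},
    \]

    and in a quantized Nash equilibrium the decoder responds to the k-th quantization bin with its conditional mean,

    \[
    u_k = \mathbb{E}\!\left[\, x \mid x \in [m_k, m_{k+1}) \,\right],
    \]

    while under the Stackelberg (leader-follower) ordering the encoder's commitment leads to the fully revealing equilibria stated above.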

    Dynamic signaling games with quadratic criteria under Nash and Stackelberg equilibria

    This paper considers dynamic (multi-stage) signaling games involving an encoder and a decoder who have subjective models of the cost functions. We consider both Nash (simultaneous-move) and Stackelberg (leader-follower) equilibria of dynamic signaling games under quadratic criteria. For the multi-stage scalar cheap talk, we show that the final-stage equilibrium is always quantized and, under further conditions, the equilibria at all time stages must be quantized. In contrast, the Stackelberg equilibria are always fully revealing. In the multi-stage signaling game, where the transmission of a Gauss-Markov source over a memoryless Gaussian channel is considered, affine policies constitute an invariant subspace under best-response maps for Nash equilibria, whereas the Stackelberg equilibria always admit linear policies for scalar sources but such policies may be nonlinear for multi-dimensional sources. We obtain an explicit recursion for optimal linear encoding policies for multi-dimensional sources and derive conditions under which Stackelberg equilibria are informative.
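
    As a sketch of why affine policies can form an invariant subspace under best-response maps (with notation chosen here, not necessarily the paper's), take a scalar Gauss-Markov source transmitted over a memoryless Gaussian channel:

    \[
    x_{t+1} = a\,x_t + w_t, \qquad y_t = u_t + v_t, \qquad u_t = A_t x_t + c_t, \qquad \hat{x}_t = \mathbb{E}\!\left[x_t \mid y_{[0,t]}\right],
    \]

    with w_t and v_t independent Gaussian noises. Under an affine encoder all variables remain jointly Gaussian, so the MMSE decoder is linear in the channel outputs, and the encoder's quadratic best response to a linear decoder is again affine; iterating the best-response maps therefore never leaves the affine class, which is the invariance property referred to above.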